Learning with Incremental Iterative Regularization
Authors
Abstract
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and prove strong universal consistency, i.e. almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning and rely on integrating statistical and optimization results.
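Since the abstract describes the algorithm only in words, a minimal sketch may help make the role of the epoch count concrete. The Python/NumPy sketch below runs incremental gradient descent for least squares: each epoch is one pass over the data in a fixed order, with an update after every example. The function name, step size, and synthetic data are illustrative assumptions, not taken from the paper; in the setting described above, all parameters other than the number of epochs are fixed a priori.

import numpy as np

def incremental_gradient_ls(X, y, step_size, n_epochs):
    # Incremental gradient descent for least squares.
    # Each epoch sweeps once over the n examples in a fixed cyclic
    # order, updating w after every single example; with the step
    # size fixed a priori, n_epochs plays the role of the
    # regularization parameter (early stopping).
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):
            # gradient of the pointwise loss (1/2) * (x_i . w - y_i)^2
            residual = X[i] @ w - y[i]
            w = w - step_size * residual * X[i]
    return w

# Hypothetical usage on synthetic data; in practice one would stop
# at the epoch count that minimizes the error on a held-out set.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
w_hat = incremental_gradient_ls(X, y, step_size=0.01, n_epochs=20)

Because the number of passes over the data acts as the regularization parameter, tuning n_epochs on a validation set takes the place of tuning a penalty weight.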
Similar works
Enforcing Local Properties in Online Learning First Order TS-fuzzy Systems by Incremental Regularization
Embedded systems are becoming increasingly widespread. Because their complexity is growing while their design time has to be reduced, they increasingly have to be equipped with self-tuning properties. One form is self-adaptation of the system behavior, which can potentially lead the system into safety-critical states. In order to avoid this and to speed up the self-tuning process, we apply a specific form of re...
Some Geometrical Bases for Incremental-Iterative Methods (RESEARCH NOTE)
Finding the equilibrium path by non-linear structural analysis is one of the most important subjects in structural engineering. To this end, incremental-iterative methods are widely used. This paper introduces several factors in the incremental steps. In addition, it suggests some control criteria for the iterative part of the non-linear analysis. These techniques are based on the geometry of e...
Performance Analysis of Wireless Cooperative Networks with Iterative Incremental Relay Selection
In this paper, an iterative incremental relay selection (IIRS) scheme is considered for wireless cooperative networks in order to increase the reliability of transmission. Unlike conventional incremental relay selection, which selects a best relay for only one iteration, the IIRS scheme iteratively applies the incremental relaying and relay selection processes. To evalu...
Iterative Regularization for Learning with Convex Loss Functions
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, iterative regularization imposes no constraint or penalty, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framewo...
Efficient Stochastic Optimization for Low-Rank Distance Metric Learning
Although distance metric learning has been successfully applied to many real-world applications, learning a distance metric from large-scale and high-dimensional data remains a challenging problem. Due to the PSD constraint, the computational complexity of previous algorithms per iteration is at least O(d^2), where d is the dimensionality of the data. In this paper, we develop an efficient stochas...